Social practice


Cultural Incongruencies in Artificial Intelligence

Prabhakaran, Vinodkumar, Qadri, Rida, Hutchinson, Ben

arXiv.org Artificial Intelligence

Artificial intelligence (AI) systems attempt to imitate human behavior. How well they do this imitation is often used to assess their utility and to attribute human-like (or artificial) intelligence to them. However, most work on AI refers to and relies on human intelligence without accounting for the fact that human behavior is inherently shaped by the cultural contexts people are embedded in, the values and beliefs they hold, and the social practices they follow. Additionally, since AI technologies are mostly conceived and developed in just a handful of countries, they embed the cultural values and practices of those countries. Similarly, the data used to train the models also fails to equitably represent global cultural diversity. Problems therefore arise when these technologies interact with globally diverse societies and cultures, with different values and interpretive practices. In this position paper, we describe a set of cultural dependencies and incongruencies in the context of AI-based language and vision technologies, and reflect on the possibilities of and potential strategies towards addressing these incongruencies.


Making artificial intelligence understandable: Constructing explanation processes

#artificialintelligence

Sifting through job applications, analyzing X-ray images, suggesting a new track list: interaction between humans and machines has become an integral part of modern life. The basis for these artificial intelligence (AI) processes is algorithmic decision-making. However, because such decisions are generally difficult to understand, they often prove less useful than anticipated. Researchers at Paderborn University and Bielefeld University are hoping to change this, and are discussing how the explainability of artificial intelligence can be improved and adapted to the needs of human users. Their work has recently been published in the respected journal IEEE Transactions on Cognitive and Developmental Systems.


Modelling Human Routines: Conceptualising Social Practice Theory for Agent-Based Simulation

Mercuur, Rijk, Dignum, Virginia, Jonker, Catholijn M.

arXiv.org Artificial Intelligence

Our routines play an important role in a wide range of social challenges such as climate change, disease outbreaks, and coordinating staff and patients in a hospital. To use agent-based simulations (ABS) to understand the role of routines in social challenges, we need an agent framework that integrates routines. This paper provides the domain-independent Social Practice Agent (SoPrA) framework that satisfies requirements from the literature to simulate our routines. By choosing the appropriate concepts from the literature on agent theory, social psychology and social practice theory, we ensure SoPrA correctly depicts current evidence on routines. By creating a consistent, modular and parsimonious framework suitable for multiple domains, we enhance the usability of SoPrA. SoPrA provides ABS researchers with a conceptual, formal and computational framework to simulate routines and gain new insights into social systems.


Incorporating social practices in BDI agent systems

Cranefield, Stephen, Dignum, Frank

arXiv.org Artificial Intelligence

When agents interact with humans, whether as embodied virtual agents or embedded in robots, it would be convenient if they could use fixed interaction protocols as they do with other agents. However, people do not keep fixed protocols in their day-to-day interactions, and the environments are often dynamic, making it impossible to use fixed protocols. Deliberating about interactions from first principles is not very scalable either, because in that case all possible reactions of a user have to be considered in the plans. In this paper we argue that social practices can be used as an inspiration for designing flexible and scalable interaction mechanisms that are also robust. However, using social practices requires extending the traditional BDI deliberation cycle to monitor landmark states and perform expected actions by leveraging existing plans. We define and implement this mechanism in Jason using a periodically run meta-deliberation plan, supported by a meta-interpreter, and illustrate its use in a realistic scenario.
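The mechanism described in this abstract, a meta-deliberation step that monitors the landmark states of a social practice and fires the agent's existing plans to reach the next expected landmark, can be illustrated with a minimal Python sketch. All names here (SocialPractice, Agent, the greeting scenario) are hypothetical stand-ins, not the paper's Jason implementation:

```python
# Hedged sketch of a BDI-style cycle extended with meta-deliberation over
# the landmark states of a social practice. All names are illustrative.
from dataclasses import dataclass

@dataclass
class SocialPractice:
    """A social practice as an ordered sequence of landmark states."""
    name: str
    landmarks: list   # ordered landmark names, e.g. ["greeted", ...]
    current: int = 0  # index of the next landmark to reach

    def next_landmark(self):
        if self.current < len(self.landmarks):
            return self.landmarks[self.current]
        return None   # practice complete

class Agent:
    def __init__(self, plans):
        self.plans = plans     # landmark -> action; stands in for BDI plans
        self.beliefs = set()   # landmark states believed to hold
        self.trace = []        # actions executed, in order

    def meta_deliberate(self, practice):
        """Periodic meta-deliberation: find the next expected landmark and
        adopt the existing plan that achieves it."""
        landmark = practice.next_landmark()
        if landmark is None:
            return False
        self.trace.append(self.plans[landmark])  # "execute" the plan
        self.beliefs.add(landmark)               # landmark now holds
        practice.current += 1
        return True

# Usage: a toy "greeting" practice driven entirely by the meta-level loop.
greeting = SocialPractice("greeting",
                          ["greeted", "smalltalk_done", "farewell_said"])
agent = Agent({"greeted": "say_hello",
               "smalltalk_done": "chat_about_weather",
               "farewell_said": "say_goodbye"})
while agent.meta_deliberate(greeting):
    pass
print(agent.trace)  # plans fired in landmark order
```

The point of the sketch is the separation of concerns: the practice supplies only the expected landmark states, while the agent reuses whatever plans it already has to reach them, rather than following a fixed protocol.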


Interactions as Social Practices: towards a formalization

Dignum, Frank

arXiv.org Artificial Intelligence

Multi-agent models are a suitable starting point to model complex social interactions. However, as the complexity of these systems increases, we argue that novel modeling approaches are needed that can deal with inter-dependencies at different levels of society, where many heterogeneous parties (software agents, robots, humans) are interacting with and reacting to each other. In this paper, we present a formalization of a social framework for agents based on the concept of Social Practices as high-level specifications of normal (expected) behavior in a given social context. We argue that social practices facilitate the practical reasoning of agents in standard social interactions.


IEEE Xplore Abstract - Versu—A Simulationist Storytelling System

#artificialintelligence

Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them.
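The idea of a social practice as a reactive joint plan that offers affordances to its participants can be sketched in a few lines of Python. This is a toy illustration under assumed names (JointPractice, the dinner scene), not Versu's actual architecture:

```python
# Hedged sketch: a social practice as a staged joint plan that affords
# role-specific actions to participating agents. Names are illustrative.
class JointPractice:
    """A recurring social situation, successor to the Schankian script."""
    def __init__(self, stages):
        # stages: list of (stage_name, {role: [afforded actions]})
        self.stages = stages
        self.index = 0  # current stage of the shared situation

    def affordances(self, role):
        """Actions the practice currently affords to an agent in `role`."""
        _, offers = self.stages[self.index]
        return offers.get(role, [])

    def advance(self):
        """React to the scene progressing to its next stage."""
        if self.index < len(self.stages) - 1:
            self.index += 1

# Usage: a two-character dinner scene, replayable from either role.
dinner = JointPractice([
    ("arrival", {"host": ["welcome_guest"], "guest": ["greet_host"]}),
    ("meal",    {"host": ["serve_food"],    "guest": ["compliment_food"]}),
])
print(dinner.affordances("guest"))  # what the guest may do on arrival
dinner.advance()
print(dinner.affordances("host"))   # what the host may do during the meal
```

Because the practice, not the individual agents, holds the shared situation, the same scene can be replayed from either role, which is the replayability property the abstract describes.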


Social Play in Non-Player Character Dialog

Treanor, Mike (American University) | McCoy, Josh (American University) | Sullivan, Anne (American University)

AAAI Conferences

Non-player characters in games generally lack believability and deep interactivity. The AI system Comme il Faut begins to tackle this by modeling social state and behaviors for game characters. The player initiates social exchanges, and the dialog and outcome are generated and displayed in their entirety. In this paper we present a model called social practices to extend Comme il Faut. Social practices increase the playability of social play by modeling social interactions at a more granular level and adding interactivity at each stage. This model also moves away from dialog trees to a more modular form of authoring to support the additional complexity.